
    Learn with SAT to Minimize Büchi Automata

    We describe a minimization procedure for nondeterministic Büchi automata (NBA). For an automaton A, another automaton A_min with the minimal number of states is learned with the help of a SAT solver. This is done by successively computing automata A' that approximate A in the sense that they accept a given finite set of positive examples and reject a given finite set of negative examples. In the course of the procedure these example sets are successively enlarged. Thus, our method can be seen as an instance of a generic learning algorithm based on a "minimally adequate teacher" in the sense of Angluin. We use a SAT solver to find an NBA for given sets of positive and negative examples. We use complementation via construction of deterministic parity automata to check candidates computed in this manner for equivalence with A; failure of equivalence yields new positive or negative examples. Our method proved successful on complete samplings of small automata and on a substantial number of larger automata. We successfully ran the minimization on over ten thousand automata, mostly with up to ten states, including the complements of all possible automata with two states and alphabet size three, and we discuss results and runtimes; individual examples had over 100 states. (Comment: In Proceedings GandALF 2012, arXiv:1210.202)
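    The procedure described in this abstract is a counterexample-guided loop: propose a small candidate NBA consistent with the current example sets via a SAT solver, check it for language equivalence with A via complementation through deterministic parity automata, and grow the example sets from failed checks. The Python sketch below only illustrates that loop and is not the authors' implementation; the three callables it takes (solve_consistent_nba, check_equivalence, accepts) are hypothetical placeholders for the SAT encoding, the parity-automaton equivalence test, and membership checking.

```python
def minimize_nba(A, max_states, solve_consistent_nba, check_equivalence, accepts):
    """Counterexample-guided minimization loop (illustrative sketch only).

    `solve_consistent_nba(n, pos, neg)` stands for the SAT encoding: it should
    return an n-state NBA accepting every word in `pos` and rejecting every
    word in `neg`, or None if no such automaton exists.
    `check_equivalence(candidate, A)` stands for the equivalence test via
    complementation through deterministic parity automata; it should return
    (True, None) or (False, witness_word).
    `accepts(A, word)` decides membership of an ultimately periodic word.
    """
    pos, neg = set(), set()
    for n in range(1, max_states + 1):
        while True:
            candidate = solve_consistent_nba(n, pos, neg)
            if candidate is None:
                break  # no n-state NBA fits the current examples; allow one more state
            equivalent, witness = check_equivalence(candidate, A)
            if equivalent:
                return candidate  # smallest automaton consistent with A found so far
            # The witness word separates the two languages: record it as a new
            # positive or negative example and ask the solver again.
            if accepts(A, witness):
                pos.add(witness)
            else:
                neg.add(witness)
    return A  # no smaller equivalent automaton found within the state bound
```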

    The regulation and structure of nonlife insurance in the United States

    The insurance industry is underdeveloped in most developing countries because of low levels of income and wealth and because restrictive regulations inhibit the supply of insurance services. But several countries have begun to reform their insurance industries. To help those countries, the authors offer an overview of insurance regulation in the United States and discuss the economics and market structure of nonlife insurance in terms of entry and exit barriers, economies of scale, and conduct and performance studies. They conclude that the U.S. nonlife insurance industry exhibits low concentration at both national and state market levels; concentration is low even on a line-by-line basis. The primary concern of regulators has been to protect policyholders from insolvency, but regulation has also often been used to protect the market position of local insurance companies against the entry of out-of-state competitors. Regulation has worked best when based on solvency monitoring, with limited restrictions on entry. It has been more harmful when it involved controls on premiums and products and on the industry's level of profitability. Over the years the industry has shown a remarkable degree of innovation, although it has also faced many serious and persistent problems. These problems include the widespread crisis in liability (including product liability and medical malpractice), the crisis in automobile insurance, the volatility of investment income, the effects of market-driven pricing and underwriting cycles, and the difficulty of measuring insurance solvency. The long-tailed lines of insurance - those that entail long delays in final settlements - are exposed to the vagaries of inflation and rising costs. Two mandatory lines - third-party automobile insurance and workers' compensation (for work accidents) - account for nearly 55 percent of premiums. These two lines - plus medical malpractice, other liability, and aircraft insurance - had combined ratios well over 125 percent in 1989. The industry has some ability to collude and to set prices, but it appears to be competitive and to earn profits below those of similarly situated financial firms. Insurance profitability is not consistently above or below normal returns, although earnings for the mandatory and strictly regulated lines of automobile insurance and workers' compensation appear to be below the level adequate for long-term viability. (Keywords: Insurance & Risk Mitigation, Non Bank Financial Institutions, Insurance Law, Environmental Economics & Policies, Financial Intermediation)
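    For context on the combined ratios cited above: the combined ratio is the standard measure of underwriting performance, and in one common formulation it relates incurred losses plus underwriting expenses to earned premiums, so a value above 100 percent signals an underwriting loss before investment income is counted.

```latex
\[
  \text{combined ratio} \;=\;
  \frac{\text{incurred losses} + \text{underwriting expenses}}{\text{earned premiums}},
  \qquad
  125\% \;\Rightarrow\; \$1.25 \text{ paid out per } \$1.00 \text{ of premium earned}.
\]
```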

    Generalization of form in visual pattern classification.

    Human observers were trained to criterion in classifying compound Gabor signals with symmetry relationships, and were then tested with each of 18 blob-only versions of the learning set. Generalization to dark-only and light-only blob versions of the learning signals, as well as to dark-and-light blob versions, was found to be excellent, thus implying virtually perfect generalization of the ability to classify mirror-image signals. The hypothesis that the learning signals are internally represented in terms of a 'blob code' with explicit labelling of contrast polarities was tested by predicting observed generalization behaviour in terms of various types of signal representations (pixelwise, Laplacian pyramid, curvature pyramid, ON/OFF, local maxima of Laplacian and curvature operators) and a minimum-distance rule. Most representations could explain generalization for dark-only and light-only blob patterns but not for the high-thresholded versions thereof. This led to the proposal of a structure-oriented blob code. Whether such a code could be used in conjunction with simple classifiers or should be transformed into a propositional scheme of representation operated upon by a rule-based classification process remains an open question.
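    The minimum-distance rule mentioned above can be read as a nearest-prototype classifier over some feature representation of the signals. The Python sketch below is only an illustration of that rule under an assumed pixelwise representation; the function and variable names are hypothetical, and the other representations listed in the abstract (Laplacian pyramid, ON/OFF, and so on) would be plugged in as alternative `represent` functions.

```python
import numpy as np

def minimum_distance_classify(test_pattern, prototypes, labels, represent):
    """Assign `test_pattern` the label of the nearest learning-set prototype.

    `represent` maps a raw pattern (2-D array) to a feature vector under some
    candidate signal representation; a plain pixelwise representation is used
    in the example below, other representations would be substituted the same way.
    """
    f_test = represent(test_pattern)
    distances = [np.linalg.norm(f_test - represent(p)) for p in prototypes]
    return labels[int(np.argmin(distances))]

# Toy usage with a pixelwise representation on random 8x8 patterns.
pixelwise = lambda img: np.asarray(img, dtype=float).ravel()
rng = np.random.default_rng(0)
protos = [rng.normal(size=(8, 8)) for _ in range(4)]
labels = ["A", "A", "B", "B"]
noisy_test = protos[2] + 0.1 * rng.normal(size=(8, 8))
print(minimum_distance_classify(noisy_test, protos, labels, pixelwise))
```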

    Correcting Biases in a lower resolution global circulation model with data assimilation

    With this work, we aim at developing a new method of bias correction using data assimilation. This method is based on the stochastic forcing of a model to correct bias. First, through a preliminary run, we estimate the bias of the model and its possible sources. Then, we establish a forcing term which is added directly inside the model's equations. We create an ensemble of runs and consider the forcing term as a control variable during the assimilation of observations. We then use this analysed forcing term to correct the bias of the model. Since the forcing is added inside the model, it acts as a source term, unlike external forcings such as wind. This procedure has been developed and successfully tested with a twin experiment on a Lorenz 95 model. It is currently being applied and tested on the sea ice-ocean NEMO LIM model, which is used in the PredAntar project. NEMO LIM is a global, low-resolution (2 degrees) coupled model (hydrodynamic model and sea ice model) with long time steps allowing simulations over several decades. Due to its low resolution, the model is subject to bias in areas where strong currents are present. We aim at correcting this bias by using perturbed current fields from higher-resolution models and randomly generated perturbations. The random perturbations need to be constrained in order to respect the physical properties of the ocean and not create unwanted phenomena. To construct these random perturbations, we first create a random field with the Diva tool (Data-Interpolating Variational Analysis). Using a cost function, this tool penalizes abrupt variations in the field, while using a custom correlation length. It also decouples disconnected areas based on topography. Then, we filter the field to smooth it and remove small-scale variations. We use this field as a random stream function and take its derivatives to obtain zonal and meridional velocity fields. We also constrain the stream function along the coasts so as not to produce currents perpendicular to the coast. The randomly generated stochastic forcings are then injected directly into the NEMO LIM model's equations in order to force the model at each time step, and not only during the assimilation step. Results from a twin experiment will be presented. This method is being applied to a real case, with observations of the sea surface height available from the mean dynamic topography of CNES (Centre national d'études spatiales). The model, the bias correction, and more extensive forcings, in particular with a three-dimensional structure and a time-varying component, will also be presented.
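    The construction of nondivergent velocity perturbations from a smoothed random stream function can be illustrated in a few lines: if psi is the stream function, then u = -dpsi/dy and v = dpsi/dx give a velocity field with zero divergence, and zeroing psi's gradient along the coast keeps the flow tangential there. The sketch below uses Gaussian smoothing of white noise as a stand-in for the Diva-generated correlated field; that substitution, and all names in the code, are assumptions for illustration rather than the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def random_velocity_perturbation(ny, nx, dy, dx, sigma=5.0, seed=0):
    """Build a nondivergent (u, v) perturbation from a smoothed random
    stream function psi: u = -dpsi/dy, v = dpsi/dx.

    Gaussian smoothing of white noise stands in for the correlation-length
    constraint that Diva would impose; a land mask could additionally zero
    psi along coasts so the flow stays tangential to the coastline.
    """
    rng = np.random.default_rng(seed)
    psi = gaussian_filter(rng.standard_normal((ny, nx)), sigma=sigma)
    # Central differences: axis 0 is y, axis 1 is x.
    dpsi_dy, dpsi_dx = np.gradient(psi, dy, dx)
    return -dpsi_dy, dpsi_dx

u, v = random_velocity_perturbation(ny=90, nx=180, dy=2.0, dx=2.0)
# The discrete divergence du/dx + dv/dy should be numerically small.
div = np.gradient(u, 2.0, axis=1) + np.gradient(v, 2.0, axis=0)
print(abs(div).max())
```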

    Value stream mapping for sustainable change at a Swedish dairy farm

    This case study increases our understanding of Lean implementation in which value stream mapping (VSM) is used to create an action plan at a small dairy and cattle farm in southwest Sweden. The researchers, the farmer-owner, and farm employees followed a step-by-step approach that resulted in ideas for operational improvements in the dairy activity. Data were collected in interviews with the farmer-owner, through researcher participation in workshops, and through researcher observations. The results reveal that VSM is an effective way to create a culture of collaboration among the farm staff, to better define their roles and responsibilities, and to improve routines, communications, and task completion. In the two-to-three-year period following the VSM project, specific improvements were observed in milk production and quality and in animal health. The results also reveal that while Lean principles are relevant given the repetitive nature of agricultural routines and tasks, the VSM element of lead-time reduction is less relevant owing to the unique value-adding biological processes in the agricultural sector.

    How the Digital Divide Affects Public E-Services: The Role of Migration Background

    Following the private sector, the public sector is also trying to benefit from the advantages of electronic service delivery, in particular from lower costs and higher service quality. While more and more services are available electronically, residents lag behind in using them. But high usage rates, and therefore a maximized potential target group covering major parts of society, are an essential prerequisite for successful public e-services. If residents are not using the newly created electronic services, they neither benefit from better service quality nor do the public service providers save money. Digital divide research can be leveraged to maximize the potential target group of public e-services. For this purpose a focus on public e-services as the level of analysis is required, since Internet access or regular Internet usage are necessary but not sufficient conditions for being able to use public e-services. This study employs qualitative research methods in an exploratory case study design to analyze the influence of migration background on the capability to use public e-services. It provides two testable propositions for further confirmatory research: due to limited language skills and different cultural experiences, for residents with a migration background Internet experience does not directly translate into confidence in their own public e-service skills.

    Survey of the segetal flora in the FFH site Dreienberg near Friedewald as a basis for longer-term monitoring: final report

    The background to this study was the question of whether the maintenance of the „Naturschutzäcker" (conservation fields) carried out since then by the Naturschutzbund (nature conservation association) is worthwhile or not. Critical voices doubted this. Specifically, the following questions were to be answered: Which notable species are currently present? What changes in the plant population can be detected since the mid-1980s? How can a meaningful data basis be created for future comparative surveys? How should the efforts of the Naturschutzbund be assessed against a supra-regional background? Should the maintenance of the arable fields be continued and supported by the biosphere reserve?

    Which Processes Do Users Not Want Online? - Extending Process Virtualization Theory

    Following the advent of the Internet, more and more processes are provided virtually, i.e., without physical interactions between the people and objects involved. For instance, e-commerce has virtualized shopping processes, since products are bought without physical inspection or interaction with sales staff. This study is founded on the key idea of process virtualization theory (PVT) that, from the users' perspective, not all processes are equally amenable to virtualization. We investigate the characteristics of processes that cause users' resistance toward the virtualized process. Surveying 501 individuals regarding 10 processes, this study constitutes the first quantitative test evaluating the prediction capabilities of PVT through the analysis of varying processes. Moreover, it introduces and successfully tests the extended PVT (EPVT), which integrates PVT with multiple related constructs from the extant literature in a unified model with multi-order causal relations. Thereby, it clearly enhances our understanding of human behavior with regard to the widespread phenomenon of process virtualization.